1.
13th International Conference on Cloud Computing, Data Science and Engineering, Confluence 2023 ; : 250-255, 2023.
Article in English | Scopus | ID: covidwho-2277115

ABSTRACT

Pneumonia has been a concerning issue worldwide. This infectious disease has a higher mortality rate than COVID-19. More than two million individuals lost their lives to it in 2019, of which almost 600,000 were children under 5 years of age. Globally, identification of the disease is done manually by radiologists, but this method is unreliable because its accuracy is not sufficiently high. With the evolution of computational resources, especially the computing power of GPUs, it has become possible to train very deep CNNs. This study involves a comparative analysis of neural networks for pneumonia recognition. The goal is to perform binary image classification for pneumonia recognition with each of three models, namely a Sequential model built from scratch in TensorFlow, ResNet50, and InceptionV3, and to compare their efficiency in order to discover which model is best suited to smaller datasets and which to larger ones. The dataset consists of 5856 anterior and posterior chest X-ray images labeled as either Normal or Pneumonic. © 2023 IEEE.
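
A rough sketch of how the three binary classifiers compared above could be set up in TensorFlow/Keras is given below. It is not the authors' code: the input size, layer widths, and training configuration are assumptions made only for illustration.

from tensorflow.keras import layers, models, applications

def scratch_cnn(input_shape=(224, 224, 3)):
    # Small Sequential CNN built from scratch
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # Normal vs. Pneumonic
    ])

def pretrained_classifier(backbone_cls, input_shape=(224, 224, 3)):
    # Wrap an ImageNet-pretrained backbone (ResNet50 or InceptionV3) for binary classification
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=input_shape, pooling="avg")
    return models.Sequential([backbone, layers.Dense(1, activation="sigmoid")])

for name, model in [("scratch", scratch_cnn()),
                    ("resnet50", pretrained_classifier(applications.ResNet50)),
                    ("inceptionv3", pretrained_classifier(applications.InceptionV3))]:
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])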

2.
3rd International Conference on Data Science and Applications, ICDSA 2022 ; 552:301-312, 2023.
Article in English | Scopus | ID: covidwho-2268370

ABSTRACT

With the worldwide pandemic due to COVID-19, several detection and diagnostic methods have been put in place. One of the standard modes of detection is computed tomography imaging. With the availability of computing resources and powerful GPUs, the analysis of extensive image data has become possible. Our proposed work first classifies CT images as normal or infected and then classifies the infected images according to their severity. The proposed work uses a 3D convolutional neural network model to extract all the relevant features from the CT scan images. The results are also compared with existing state-of-the-art algorithms. The proposed work is evaluated in terms of accuracy, precision, recall, kappa value, and Intersection over Union. The model achieved an overall accuracy of 94.234% and a kappa value of 0.894. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
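
For readers unfamiliar with volumetric models, a minimal 3D CNN of the general kind described above can be sketched in Keras as follows; the input volume shape, layer sizes, and two-class head are assumptions, not the authors' architecture.

from tensorflow.keras import layers, models

def build_3d_cnn(input_shape=(64, 128, 128, 1), num_classes=2):
    # Stacked 3D convolutions extract features from the whole CT volume
    return models.Sequential([
        layers.Conv3D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling3D(),
        layers.Conv3D(32, 3, activation="relu"),
        layers.MaxPooling3D(),
        layers.GlobalAveragePooling3D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),   # e.g. normal vs. infected
    ])

model = build_3d_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])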

3.
1st Combined International Workshop on Interactive Urgent Supercomputing, CIW-IUS 2022 ; : 1-9, 2022.
Article in English | Scopus | ID: covidwho-2265990

ABSTRACT

The COVID-19 pandemic has presented a clear and pressing need for urgent decision making. Set in an environment of uncertain and unreliable data and a diverse range of possible interventions, there is an obvious need for integrating HPC into workflows that include model calibration and exploration of the decision space. In this paper, we present the design of PanSim, a portable, performant, and productive agent-based simulator, which has been extensively used to model and forecast the pandemic in Hungary. We show its performance and scalability on CPUs and GPUs, then discuss the workflows PanSim integrates into. We describe the heterogeneous, resource-constrained HPC environment available to us and formulate a scheduling optimisation problem, as well as heuristics to solve it, to either minimise the execution time of a given number of simulations or maximise the number of simulations executed in a given time frame. © 2022 IEEE.
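
As a toy illustration of the second scheduling objective mentioned above (maximising the number of simulations completed in a fixed time frame), the greedy heuristic below assigns back-to-back simulation runs to a heterogeneous pool of workers; the worker names and per-simulation runtimes are invented for the example and are not from the paper.

import heapq

def max_simulations(per_sim_runtime, time_budget):
    """per_sim_runtime: dict worker -> seconds one simulation takes on that worker."""
    ready = [(0.0, w) for w in per_sim_runtime]   # (time the worker becomes free, worker)
    heapq.heapify(ready)
    finished = 0
    while ready:
        free_at, worker = heapq.heappop(ready)
        end = free_at + per_sim_runtime[worker]
        if end <= time_budget:
            finished += 1
            heapq.heappush(ready, (end, worker))   # keep the worker busy until the budget runs out
    return finished

print(max_simulations({"cpu-node": 600.0, "gpu-node": 90.0}, time_budget=3600.0))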

4.
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment ; 1046, 2023.
Article in English | Scopus | ID: covidwho-2241361

ABSTRACT

The Alpha Magnetic Spectrometer (AMS) is constantly exposed to harsh conditions on the ISS. As such, it must be constantly monitored and adjusted to ensure the AMS operates safely and efficiently. With the addition of the Upgraded Tracker Thermal Pump System, the legacy monitoring interface was no longer suitable for use. This paper describes the new AMS Monitoring Interface (AMI). The AMI is built with state-of-the-art time-series database and analytics software. It uses a custom feeder program to process AMS Raw Data as time-series data points, feeds them into InfluxDB databases, and uses Grafana as a visualization tool. It follows modern design principles, allowing client CPUs to handle the processing work, distributed creation of AMI dashboards, and up-to-date security protocols. In addition, it offers a simpler way of modifying the AMI and allows the use of APIs to automate backup and synchronization. The new AMI has been in use since January 2020 and was a crucial component of remote shift-taking during the COVID-19 pandemic. © 2022 Elsevier B.V.
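
A minimal sketch of the kind of feeder loop described above is shown below: raw housekeeping records are turned into time-series points and written to InfluxDB, from which Grafana dashboards read. The bucket, organisation, token, measurement name, and record layout are all assumptions made for illustration, not the actual AMI feeder.

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="TOKEN", org="ams")
write_api = client.write_api(write_options=SYNCHRONOUS)

def feed(records):
    # records: iterable of (timestamp_ns, subsystem, sensor, value) tuples
    for ts, subsystem, sensor, value in records:
        point = (Point("ams_housekeeping")
                 .tag("subsystem", subsystem)
                 .tag("sensor", sensor)
                 .field("value", float(value))
                 .time(ts, WritePrecision.NS))
        write_api.write(bucket="ams", record=point)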

5.
IEEE Transactions on Parallel and Distributed Systems ; 2023.
Article in English | Scopus | ID: covidwho-2232135

ABSTRACT

Simulation-based Inference (SBI) is a widely used set of algorithms for learning the parameters of complex scientific simulation models. While primarily run on CPUs in High-Performance Computing clusters, these algorithms have been shown to scale in performance when developed to run on massively parallel architectures such as GPUs. While parallelizing existing SBI algorithms provides performance gains, this might not be the most efficient way to utilize the achieved parallelism. This work proposes a new parallelism-aware adaptation of an existing SBI method, namely approximate Bayesian computation with Sequential Monte Carlo (ABC-SMC). This new adaptation is designed to utilize the parallelism not only for performance gains but also for qualitative benefits in the learnt parameters. The key idea is to replace the notion of a single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step-sizes sampled from a tuned Beta distribution. This allows the new ABC-SMC algorithm to explore the state space of the parameters being learned more efficiently. We test the effectiveness of the proposed algorithm by learning parameters for an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we obtain results of similar quality in ~100× fewer simulations and observe ~80× lower run-to-run variance across 10 independent trials. IEEE
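
The core idea, replacing one global step-size with per-particle step-sizes drawn from a tuned Beta distribution, can be sketched as follows; the kernel form, Beta parameters, and scaling are assumptions and not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)

def perturb(particles, beta_a=2.0, beta_b=5.0, scale=1.0):
    """particles: (n_particles, n_params) array of accepted parameter vectors."""
    n, d = particles.shape
    # One step-size per particle, sampled from Beta(a, b) and rescaled
    step_sizes = scale * rng.beta(beta_a, beta_b, size=(n, 1))
    # Gaussian perturbation whose spread follows the particle-wise step-size
    return particles + step_sizes * rng.normal(size=(n, d))

population = rng.uniform(0.0, 1.0, size=(1024, 3))   # e.g. epidemiological rate parameters
proposals = perturb(population)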

6.
36th IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2022 ; : 196-205, 2022.
Article in English | Scopus | ID: covidwho-2018897

ABSTRACT

Selective sweep detection carries theoretical significance and has several practical implications, from explaining the adaptive evolution of a species in an environment to understanding the emergence of viruses from animals, such as SARS-CoV-2, and their transmission from human to human. The plethora of available genomic data for population genetic analyses, however, poses various computational challenges to existing methods and tools, leading to prohibitively long analysis times. In this work, we accelerate LD (Linkage Disequilibrium)-based selective sweep detection using GPUs and FPGAs on personal computers and datacenter infrastructures. LD has previously been efficiently accelerated with both GPUs and FPGAs. However, LD alone cannot serve as an indicator of selective sweeps. Here, we complement previous research with dedicated accelerators for the ω statistic, which is a direct indicator of a selective sweep. We evaluate the performance of our accelerator solutions for computing the ω statistic and for a complete sweep detection method, as implemented by the open-source software OmegaPlus. In comparison with a single CPU core, the FPGA accelerator delivers up to 57.1× and 61.7× faster computation of the ω statistic and the complete sweep detection analysis, respectively. The respective speedups attained by the GPU-accelerated version of OmegaPlus are 2.9× and 12.9×. The GPU-accelerated implementation is available for download here: https://github.com/MrKzn/omegaplus.git. © 2022 IEEE.
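
For reference, the ω statistic contrasts linkage disequilibrium within the two flanks of a candidate position against linkage disequilibrium across them. The sketch below computes it from a precomputed r² matrix for a given split, following the commonly used definition; it is illustrative only and is not the OmegaPlus implementation.

import numpy as np
from math import comb

def omega(r2, ell):
    """r2: (S, S) symmetric matrix of pairwise r^2 values; ell: size of the left flank."""
    S = r2.shape[0]
    L, R = np.arange(ell), np.arange(ell, S)
    within = (np.triu(r2[np.ix_(L, L)], 1).sum()
              + np.triu(r2[np.ix_(R, R)], 1).sum())   # LD within each flank
    between = r2[np.ix_(L, R)].sum()                  # LD across the flanks
    numer = within / (comb(ell, 2) + comb(S - ell, 2))
    denom = between / (ell * (S - ell))
    return numer / denom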

7.
22nd International Conference on Computational Science and Its Applications, ICCSA 2022 ; 13375 LNCS:412-427, 2022.
Article in English | Scopus | ID: covidwho-1971559

ABSTRACT

The coronavirus outbreak became a major concern for society worldwide. Technological innovation and ingenuity are essential to fight the COVID-19 pandemic and bring us one step closer to overcoming it. Researchers around the world are actively working to find available alternatives in different fields, such as healthcare systems, pharmaceutics, and health prevention, among others. With the rise of artificial intelligence (AI) in the last 10 years, AI-based applications have become the prevalent solution in different areas because of their higher capability, and they are now being adopted to help combat COVID-19. This work provides a fast detection system for COVID-19 characteristics in X-ray images based on deep learning (DL) techniques. The system is available as a free web-deployed service for fast patient classification, alleviating the high demand on standard methods for COVID-19 diagnosis. It consists of two deep learning models: one, based on the MobileNet architecture, to differentiate between X-ray and non-X-ray images, and another, based on the DenseNet architecture, to identify chest X-ray images with characteristics of COVID-19. For real-time inference, a pair of dedicated GPUs is provided, which reduces the computational time. The whole system can filter out non-chest X-ray images and detect whether the X-ray presents characteristics of COVID-19, highlighting the most sensitive regions. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
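
A toy sketch of the two-stage screening pipeline described above is given below: a MobileNet-based filter first rejects non-X-ray inputs, then a DenseNet-based model flags COVID-19 characteristics. The model file names, output conventions, and thresholds are assumptions, not the deployed service's code.

import numpy as np
import tensorflow as tf

xray_filter = tf.keras.models.load_model("mobilenet_xray_filter.h5")      # hypothetical file
covid_model = tf.keras.models.load_model("densenet_covid_classifier.h5")  # hypothetical file

def screen(image_batch, filter_threshold=0.5, covid_threshold=0.5):
    """image_batch: (N, H, W, 3) array already preprocessed for both models."""
    is_xray = xray_filter.predict(image_batch)[:, 0] >= filter_threshold
    results = np.full(len(image_batch), "not an X-ray", dtype=object)
    if is_xray.any():
        covid_scores = covid_model.predict(image_batch[is_xray])[:, 0]
        results[is_xray] = np.where(covid_scores >= covid_threshold,
                                    "COVID-19 characteristics",
                                    "no COVID-19 characteristics")
    return results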

8.
8th International Conference on Artificial Intelligence and Security , ICAIS 2022 ; 1586 CCIS:306-316, 2022.
Article in English | Scopus | ID: covidwho-1971397

ABSTRACT

With the development of deep learning, image recognition technology has been applied in many areas, and convolutional neural networks have played a key role in image recognition, enabled by increasing computing power and massive data. However, for developers who want to train convolutional neural networks and deploy the resulting applications on personal computers, IoT devices, and embedded platforms with little Graphics Processing Unit (GPU) memory, the large number of parameters involved in training is a great challenge. To address this problem, this paper uses depthwise separable convolution to optimize the classic convolutional neural network model VGG-16. The VGG-16-JS model is proposed, applying the Inception structure's dimensionality reduction and depthwise separable convolution to the VGG-16 convolutional neural network model. Finally, this paper compares the classification success rates of VGG-16 and VGG-16-JS for the application scenario of COVID-19 mask-wearing. A series of reliable experimental results shows that the improved VGG-16-JS model significantly reduces the number of parameters required for model training without a significant drop in the success rate, easing the GPU memory requirements for training neural networks to a certain extent. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
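
The parameter saving that motivates the VGG-16-JS model can be illustrated by swapping standard convolutions for depthwise separable ones in a generic VGG-style stage; the block below illustrates the technique and is not the authors' exact architecture.

import tensorflow as tf
from tensorflow.keras import layers, models

def vgg_block(filters):       # standard VGG-style stage
    return [layers.Conv2D(filters, 3, padding="same", activation="relu"),
            layers.Conv2D(filters, 3, padding="same", activation="relu"),
            layers.MaxPooling2D()]

def separable_block(filters): # the same stage with depthwise separable convolutions
    return [layers.SeparableConv2D(filters, 3, padding="same", activation="relu"),
            layers.SeparableConv2D(filters, 3, padding="same", activation="relu"),
            layers.MaxPooling2D()]

standard = models.Sequential([tf.keras.Input(shape=(224, 224, 3)), *vgg_block(64)])
separable = models.Sequential([tf.keras.Input(shape=(224, 224, 3)), *separable_block(64)])
print(standard.count_params(), separable.count_params())   # the separable variant has far fewer parameters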

9.
IEEE Transactions on Emerging Topics in Computing ; : 1-12, 2022.
Article in English | Scopus | ID: covidwho-1961439

ABSTRACT

The social and economic impact of the COVID-19 pandemic demands a reduction in the time required to find a therapeutic cure. In this paper, we describe the EXSCALATE molecular docking platform, which is capable of scaling to an entire modern supercomputer to support extreme-scale virtual screening campaigns. Such virtual experiments can quickly provide information on which molecules to consider in the next stages of the drug discovery pipeline, which is a key asset in case of a pandemic. The EXSCALATE platform has been designed to benefit from heterogeneous computation nodes and to reduce scaling issues. In particular, we maximized accelerator usage, minimized the communication between nodes, and aggregated the I/O requests to serve them more efficiently. Moreover, we balanced the computation across the nodes by designing an ad-hoc workflow based on the predicted execution time of each molecule. We deployed the platform on two HPC supercomputers, with a combined computational power of 81 PFLOPS, to evaluate the interaction between 70 billion small molecules and 15 binding sites of 12 viral proteins of SARS-CoV-2. The experiment lasted 60 hours and performed more than one trillion ligand-pocket evaluations, setting a new record for virtual screening scale. IEEE
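
The per-node balancing idea mentioned above (assigning ligands according to predicted execution time) can be sketched with a simple longest-first greedy heuristic; the predictor, node count, and data structures are assumptions for illustration and not the EXSCALATE implementation.

import heapq

def balance(predicted_times, n_nodes):
    """predicted_times: dict molecule_id -> predicted docking seconds; returns node -> molecules."""
    nodes = [(0.0, i, []) for i in range(n_nodes)]   # (accumulated work, node id, assigned molecules)
    heapq.heapify(nodes)
    for mol, t in sorted(predicted_times.items(), key=lambda kv: -kv[1]):
        load, i, mols = heapq.heappop(nodes)         # node with the least work so far
        mols.append(mol)
        heapq.heappush(nodes, (load + t, i, mols))
    return {i: mols for _, i, mols in nodes}

print(balance({"mol-a": 4.0, "mol-b": 1.0, "mol-c": 3.0, "mol-d": 2.0}, n_nodes=2))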

10.
Mobile Information Systems ; 2022, 2022.
Article in English | Scopus | ID: covidwho-1950372

ABSTRACT

Coronaviruses are a large family of viruses that infect humans and damage respiratory function, causing illness ranging from the common cold to more serious diseases such as ARDS and SARS; the most recently discovered member of the family causes COVID-19. Isolation at home or in hospital depends on one's health history and condition. The disease triggered by the virus can lead to a deterioration in health, so there is a need for early detection. Recently, many works have deployed detection techniques based on chest X-rays. In this work, a solution is proposed that consists of a prototype of an AI-based, Flask-driven web application framework that predicts six different classes: ARDS, bacteria, COVID-19, SARS, Streptococcus, and virus. Each category of X-ray image was scrutinized, and training and testing were conducted using deep learning algorithms such as CNN, ResNet (with and without dropout), VGG16, and AlexNet to detect the status of the X-rays. Recent FPGA design tools are compatible with the software models used in deep learning methods, and FPGAs suit deep learning algorithms from the perspectives of flexibility, innovation, and hardware acceleration; high-performance FPGA hardware can be advantageous over GPUs. Looking forward, such devices can integrate efficiently with deep learning modules, acting as an alternative platform that bridges the gap between architecture and power-related design and making FPGAs a good option for implementing these algorithms. The design attains 121 μW of power and 89 ms of delay. It was implemented in an FPGA environment and observed to attain a reduced gate count and low power. © 2022 Anupama Namburu et al.
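
A minimal sketch of a Flask-driven prediction endpoint of the kind described above is shown below; the class list follows the abstract, while the model file, preprocessing, and route are assumptions made for illustration.

import numpy as np
from PIL import Image
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model

CLASSES = ["ARDS", "bacteria", "COVID-19", "SARS", "Streptococcus", "virus"]
app = Flask(__name__)
model = load_model("chest_xray_classifier.h5")   # hypothetical trained model

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded X-ray, resize, normalise, and classify it
    img = Image.open(request.files["xray"]).convert("RGB").resize((224, 224))
    x = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)
    probs = model.predict(x)[0]
    return jsonify({"prediction": CLASSES[int(np.argmax(probs))]})

if __name__ == "__main__":
    app.run()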

11.
8th EAI International Conference on Industrial Networks and Intelligent Systems, INISCOM 2022 ; 444 LNICST:107-124, 2022.
Article in English | Scopus | ID: covidwho-1919727

ABSTRACT

In the modern age, the growth of embedded devices, IoT (Internet of Things), 5G (fifth generation), and AI (artificial intelligence) has driven edge AI applications. Adopting edge computing for AI applications is intended to address power consumption, network capacity, and response latency issues. In this paper, we introduce an intelligent edge system. It aims to assist with managing and developing microservices-based AI applications on embedded computers with limited hardware resources. The proposed system uses Docker/containerd and a lightweight Kubernetes cluster (K3s) for high availability, self-healing, load balancing, scaling, and automated deployment. It also uses the GPU (Graphics Processing Unit) to speed up AI applications. The centralized cluster management and monitoring features simplify the administration of clusters and services, especially at large scale. Meanwhile, a container registry and a DevOps platform with a built-in code repository and CI/CD (Continuous Integration/Continuous Delivery) offer continuous integration and delivery for AI applications running on the cluster. This improves the process of developing and managing AI applications at the edge. As a demonstration, we implement a face mask recognition application with the proposed system. This application employs state-of-the-art, lightweight deep learning object detection models to observe mask violations and contribute to reducing the spread of COVID-19. © 2022, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.

12.
International Journal of Parallel, Emergent and Distributed Systems ; 2022.
Article in English | Scopus | ID: covidwho-1900955

ABSTRACT

Field-programmable gate arrays (FPGAs) have become widely prevalent in recent years as a great alternative to application-specific integrated circuits (ASICs) and as a potentially cheap alternative to expensive graphics processing units (GPUs). Introduced as a prototyping solution for ASICs, FPGAs are now widely popular in applications such as artificial intelligence (AI) and machine learning (ML) models that require processing data rapidly. As a relatively low-cost alternative to GPUs, FPGAs have the advantage that they can be reprogrammed for almost any data-driven application. In this work, we propose an easily scalable and cost-effective cluster-based co-processing system using FPGAs for ML and AI applications that is easily reconfigured to the requirements of each user application. The aim is to introduce a clustering system of FPGA boards to improve the efficiency of the training component of machine learning algorithms. Our proposed configuration provides an opportunity to utilise relatively inexpensive FPGA development boards to produce a cluster without expert knowledge of VHDL, Verilog, or the system designs related to FPGA development. The system consists of two parts, a computer-based host application that controls the cluster and an FPGA cluster connected through a high-speed Ethernet switch, allowing users to customise and adapt it without much effort. The methods proposed in this paper allow any FPGA board with an Ethernet port to be used as part of the cluster and scaled without bound. To demonstrate the effectiveness, flexibility, and portability of the proposed work, a two-part experiment with a homogeneous and a heterogeneous cluster was conducted, with results compared against a desktop computer and combinations of FPGAs in the two clusters. Data sets ranging in size from 60,000 to 14 million, including stroke prediction and COVID-19 data, were used in the experiments. Results suggest that the proposed system performs close to 70% faster than a traditional computer with similar accuracy rates. © 2022 Informa UK Limited, trading as Taylor & Francis Group.
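
On the host side, the role described above can be sketched as a small TCP client that frames data chunks, sends them to the FPGA boards, and collects results; the board addresses, port, serialisation, and framing below are assumptions and not the authors' protocol.

import pickle
import socket

BOARDS = [("192.168.1.10", 5000), ("192.168.1.11", 5000)]   # hypothetical board addresses

def send_chunk(addr, chunk):
    with socket.create_connection(addr, timeout=30) as conn:
        payload = pickle.dumps(chunk)
        conn.sendall(len(payload).to_bytes(8, "big") + payload)   # length-prefixed frame
        size = int.from_bytes(conn.recv(8), "big")
        buf = b""
        while len(buf) < size:
            buf += conn.recv(4096)
        return pickle.loads(buf)

def train_on_cluster(batches):
    # Round-robin the training batches across the available boards
    return [send_chunk(BOARDS[i % len(BOARDS)], batch) for i, batch in enumerate(batches)]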

13.
ACM Journal on Emerging Technologies in Computing Systems ; 18(2), 2022.
Article in English | Scopus | ID: covidwho-1846548

ABSTRACT

Epidemiology models are central to understanding and controlling large-scale pandemics. Several epidemiology models require simulation-based inference such as Approximate Bayesian Computation (ABC) to fit their parameters to observations. ABC inference is highly amenable to efficient hardware acceleration. In this work, we develop parallel ABC inference for a stochastic epidemiology model of COVID-19. The statistical inference framework is implemented and compared on Intel's Xeon CPU, NVIDIA's Tesla V100 GPU, Google's V2 Tensor Processing Unit (TPU), and Graphcore's Mk1 Intelligence Processing Unit (IPU), and the results are discussed in the context of their computational architectures. Results show that TPUs are 3×, GPUs are 4×, and IPUs are 30× faster than Xeon CPUs. Extensive performance analysis indicates that the difference between the IPU and GPU can be attributed to higher communication bandwidth, closeness of memory to compute, and higher compute power in the IPU. The proposed framework scales across 16 IPUs, with scaling overhead not exceeding 8% for the experiments performed. We present an example of our framework in practice, performing inference on the epidemiology model across three countries and giving a brief overview of the results. © 2022 Association for Computing Machinery.

14.
Computer Applications in Engineering Education ; 2022.
Article in English | Scopus | ID: covidwho-1825891

ABSTRACT

Today, microcontrollers are of paramount importance in various aspects of life. They are used in many industrial fields to design devices ranging from simple to highly complex. With the COVID-19 crisis ongoing, blended learning is the ideal solution for a post-pandemic society. This paper proposes a blended learning system to address today's problems in teaching microcontroller courses by combining distance learning with the proposed training toolkit for hands-on work. Implementation of the proposed solution began by constructing an inexpensive training kit ($100) to empower all students, even those in remote rural areas. The distance learning model allows the proposed IoT projects to be simulated electronically anywhere and at any time using the Proteus design suite, which helps students to work through them before the actual laboratory appointment. Two learning models are programmed in assembly language, which is directly related to the internal architecture of the microcontroller and provides access to all the real capabilities of its central processing unit. To acquaint students with all the features offered by the microcontroller integrated circuit, various IoT projects were constructed, each one dedicated to learning architectural features important to engineering students. The proposed IoT systems operate with minimal power consumption, which is very important for portable devices. A student questionnaire was formulated to measure the proposed system's benefit over three academic years. © 2022 Wiley Periodicals LLC.

15.
3rd International Conference on Communication, Devices and Computing, ICCDC 2021 ; 851:125-133, 2022.
Article in English | Scopus | ID: covidwho-1750655

ABSTRACT

In the ongoing COVID-19 situation, one of the most basic yet necessary supplies for any human being is the face mask. Medical stores are facing a shortage of face masks, and buying them leads to crowding in confined spaces such as medical stores, hence aggravating the situation. The only solution is to increase the number of sources from which citizens can get face masks while avoiding crowding and contact with other people. The proposed Mask Vending Machine will make this happen. The physical machine that stores and vends the masks has a Raspberry Pi as its central processing unit, and the additional components, such as the stepper motors and the display monitor, are controlled by the Raspberry Pi. For payment and choice of quantity, an app has been designed. A QR code displayed on the monitor of the vending machine has to be scanned with the app. Once scanned, the app asks the user for the number of masks needed and facilitates the transaction process. Once the transaction is successful, the masks are vended. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

16.
Computers, Materials and Continua ; 72(1):1123-1137, 2022.
Article in English | Scopus | ID: covidwho-1732654

ABSTRACT

The key to preventing COVID-19 is to diagnose patients quickly and accurately. Studies have shown that using Convolutional Neural Networks (CNNs) to analyze chest Computed Tomography (CT) images is helpful for timely COVID-19 diagnosis. However, owing to personal privacy issues, public chest CT data sets are relatively few, which has limited the application of CNNs to COVID-19 diagnosis. Also, many CNNs have complex structures and massive parameters: even when equipped with a dedicated Graphics Processing Unit (GPU) for acceleration, they still take a long time, which is not conducive to widespread application. To solve the above problems, this paper proposes a lightweight CNN classification model based on transfer learning. The lightweight CNN MobileNetV2 is used as the backbone of the model to cope with the shortage of hardware resources and computing power. To alleviate model overfitting caused by the insufficient data set, transfer learning is used to train the model. The study first exploits the weight parameters trained on the ImageNet database to initialize the MobileNetV2 network and then retrains the model on the CT image data set provided by Kaggle. Experimental results on a computer equipped only with a Central Processing Unit (CPU) show that the model takes only 1.06 s on average to diagnose a chest CT image. Compared to other lightweight models, the proposed model has higher classification accuracy and reliability while having a lightweight architecture and few parameters, so it can easily be applied to computers without GPU acceleration. Code: github.com/ZhouJie-520/paper-codes. © 2022 Tech Science Press. All rights reserved.
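
The transfer-learning recipe summarised above (MobileNetV2 initialised with ImageNet weights, then retrained on chest CT images) can be sketched in Keras as follows; the input size, classification head, and optimiser settings are assumptions rather than the paper's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

backbone = MobileNetV2(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = True   # retrain the whole network on the CT data set

model = models.Sequential([
    backbone,
    layers.Dropout(0.2),
    layers.Dense(2, activation="softmax"),   # COVID vs. non-COVID CT
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])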

17.
5th Conference on Machine Translation, WMT 2020 ; : 875-880, 2021.
Article in English | Scopus | ID: covidwho-1668616

ABSTRACT

In this paper we describe the systems developed at Ixa for our participation in the WMT20 Biomedical shared task in three language pairs: en-eu, en-es and es-en. When defining our approach, we put the focus on making efficient use of corpora recently compiled for training Machine Translation (MT) systems to translate COVID-19-related text, as well as on reusing previously compiled corpora and previously developed systems for the biomedical and clinical domains. Regarding the techniques used, we build on the findings of our previous work on translating clinical texts into Basque, making use of clinical terminology to adapt the MT systems to the clinical domain. However, after manually inspecting some of the outputs generated by our systems, for most of the submissions we ended up using the system trained only on the basic corpus, since the systems including the clinical terminologies generated outputs shorter than the corresponding references. Thus, we present simple baselines for translating abstracts between English and Spanish (en/es); while for translating abstracts and terms from English into Basque (en-eu), we concatenate the best en-es system for each kind of text with our es-eu system. We present automatic evaluation results in terms of BLEU scores, and analyse the effect of including clinical terminology on the average sentence length of the generated outputs. Following recent recommendations for a responsible use of GPUs in NLP research, we include an estimate of the CO2 emissions generated, based on the power consumed for training the MT systems. © 2020 Association for Computational Linguistics
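
Estimates of this kind typically multiply device power draw by training time, a data-centre overhead factor (PUE), and the grid's carbon intensity. The sketch below shows the arithmetic with placeholder numbers; none of the values are taken from the paper.

def co2_kg(gpu_power_w, hours, n_gpus=1, pue=1.5, grid_kg_per_kwh=0.3):
    # Energy in kWh times the grid's emission factor gives kg of CO2-equivalent
    energy_kwh = gpu_power_w / 1000.0 * hours * n_gpus * pue
    return energy_kwh * grid_kg_per_kwh

print(co2_kg(gpu_power_w=250, hours=48))   # ~5.4 kg CO2e with these placeholder values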
